FYI, GPT-4 can already solve reCAPTCHA using its multimodal capabilities. We assume an attack vector of one person holding one ID.
1. Manipulation of on-chain activity. Since crowd-workers compare each address's on-chain activity, an attacker could manipulate a specific address's activity to influence the comparison task: for example, by sending transactions to certain addresses to make them appear more similar or dissimilar to each other. Please see "Reverse Engineering the Social Graph Oracle."
Assume the attacker creates a project in Gitcoin Grants. Due to the nature of QF, if you create your own project and donate to it yourself, the project receives more than the amount you donate (the budget for the attack). If plural QF (which adjusts voting power based on the voter's position in the social graph) is employed, there is an incentive to manipulate the attacker's wallet toward the position where it will have the most voting power, since projects supported by the broader community receive larger amounts from the matching pool.
In the case of GR15, the attacker collects the wallet addresses of those who donated in the GR15 round and computes a pseudo social-graph oracle approximating what DeCartography would output, which maps the (relative) social distance between those who have already donated. The attacker then creates multiple accounts, uses addresses whose private keys they control (at roughly the same cost a bona fide user would incur to create them), and obtains Gitcoin Passport stamps to gain some humanity score.
For bona fide users, the "wallet activity of those who have already donated" has already cost real time and money to build. Forging a social graph that outweighs more than half of the existing bona fide users would cost the attacker more than the project would ultimately gain from the attack.
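To make the QF incentive above concrete: under standard quadratic funding, a project's total funding is the square of the sum of the square roots of its contributions, so splitting one attack budget across many Sybil accounts inflates the match. A minimal sketch (all amounts hypothetical):

```python
import math

def qf_total(contributions):
    """Standard quadratic funding: total = (sum of square roots)^2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# One honest donor giving 100: no amplification.
honest = qf_total([100])        # (sqrt(100))^2 = 100.0

# The same 100 split across 25 Sybil accounts of 4 each:
sybil = qf_total([4] * 25)      # (25 * sqrt(4))^2 = 2500.0

# The attacker's project draws an extra 2400 from the matching pool.
assert sybil - honest == 2400.0
```

This is exactly why QF needs Sybil resistance: the gap between the two figures is paid by the matching pool, not the attacker.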
2. Sybil attacks. Mitigations are already work in progress via the Anti-Sybil Legos.
3. Bribery or collusion. An attacker could bribe or collude with a group of crowd-workers to bias the results of the task (vote-buying aimed at validators, i.e., crowd-workers). We call this legitimized collusion "plutocracy". Fundamentally, the problem is buying accounts and thereby taking over decision-making. Even if the questions are randomized, a multimodal AI agent can continuously monitor and store the questions while operating the accounts under its control. Whenever the same question reappears, the agent slightly nudges the answer in the direction it wants (similar in spirit to "deliberately making a mistake", except that this task has no single right answer). This structure resembles "The P + epsilon Attack"; see https://blog.ethereum.org/2015/01/28/p-epsilon-attack/ for details.
We assume a simple Schelling coin game powered by crowd-workers. Its payoff matrix is as follows.
stable: honest
               you vote 0    you vote 1
others vote 0  P             0
others vote 1  0             P
Now, the attack. Suppose that the attacker credibly commits to pay out X to voters who voted 1 after the game is over, where X = P + ε if the majority votes 0, and X = 0 if the majority votes 1. The payoff matrix looks like this:
stable: attack
               you vote 0    you vote 1
others vote 0  P             P + ε
others vote 1  0             P
Thus, voting 1 is a dominant strategy no matter what you think the majority will do. Hence, assuming the system is not dominated by altruists, the majority will vote 1, and so the attacker will not need to pay anything at all. The attack has successfully managed to take over the mechanism at zero cost. Regardless of the expected majority, if you vote for the bribed outcome whose payment has already been credibly committed, you are rewarded whichever way it goes.
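The dominance argument can be checked mechanically. The sketch below (P and ε are illustrative values) encodes the attack payoff matrix above and confirms that voting 1 weakly dominates voting 0:

```python
P = 10.0     # protocol payout for voting with the majority (illustrative)
EPS = 0.01   # the attacker's epsilon (illustrative)

def payoff(my_vote, majority_vote):
    """Voter payoff under the attacker's commitment: the bribe
    X = P + EPS is paid to 1-voters only if the majority votes 0."""
    base = P if my_vote == majority_vote else 0.0
    bribe = (P + EPS) if (my_vote == 1 and majority_vote == 0) else 0.0
    return base + bribe

# Voting 1 weakly dominates voting 0 against either majority outcome:
assert payoff(1, 0) > payoff(0, 0)    # P + eps  >  P
assert payoff(1, 1) >= payoff(0, 1)   # P        >= 0
```

Since every voter reasons this way, the majority votes 1 and the bribe condition (majority votes 0) never triggers, so the attack costs nothing.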
Salvaging Schelling Schemes
There are a few avenues that one can take to try to salvage the Schelling mechanism. One approach is that instead of round N of the Schelling consensus itself deciding who gets rewarded based on the "majority is right" principle, we use round N + 1 to determine who should be rewarded during round N, with the default equilibrium being that only people who voted correctly during round N (both on the actual fact in question and on who should be rewarded in round N - 1) should be rewarded. Theoretically, this requires an attacker wishing to perform a cost-free attack to corrupt not just one round, but also all future rounds, making the required capital deposit that the attacker must make unbounded.
However, this approach has two flaws. First, the mechanism is fragile: if the attacker manages to corrupt some round in the far future by actually paying up P + ε to everyone, regardless of who wins, then the expectation of that corrupted round causes an incentive to cooperate with the attacker to back-propagate to all previous rounds. Hence, corrupting one round is costly, but corrupting thousands of rounds is not much more costly.
Second, because of discounting, the required deposit to overcome the scheme does not need to be infinite; it just needs to be very, very large (i.e. inversely proportional to the prevailing interest rate). But if all we want is to make the minimum required bribe larger, then there exists a much simpler and better strategy for doing so, pioneered by Paul Sztorc: require participants to put down a large deposit, and build in a mechanism by which the more contention there is, the more funds are at stake. At the limit, where slightly over 50% of votes are in favor of one outcome and 50% in favor of the other, the entire deposit is taken away from minority voters. This ensures that the attack still works, but the bribe must now be greater than the deposit (roughly equal to the payout divided by the discounting rate, giving us equal performance to the infinite-round game) rather than just the payout for each round. Hence, in order to overcome such a mechanism, one would need to be able to prove that one is capable of pulling off a 51% attack, and perhaps we may simply be comfortable with assuming that attackers of that size do not exist.
In other words: incorporate a system where the more contention there is (i.e., the more likely the mechanism is to be broken), the more funds are at risk. The way to prevent this attack is to "raise the minimum staking amount" and, "if the split is 51% vs 49%, slash the latter in full".
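A rough model of that idea, assuming a simple linear schedule (an assumption, not Sztorc's exact formula) in which the slashed fraction of a minority voter's deposit grows with contention, reaching the full deposit at a 51/49 split:

```python
def minority_slash_fraction(minority_share):
    """Fraction of a minority voter's deposit that is slashed.
    Linear model (an illustrative assumption): a unanimous vote puts
    nothing at stake; a near-50/50 split slashes the full deposit."""
    assert 0.0 <= minority_share <= 0.5
    return 2.0 * minority_share

def min_bribe(deposit, P, minority_share):
    """Lower bound on a per-voter P + epsilon bribe: it must now cover
    the slashed deposit as well as the forgone protocol payout."""
    return P + minority_slash_fraction(minority_share) * deposit

# Uncontentious question: the bribe only needs to beat the payout.
print(min_bribe(deposit=1000, P=10, minority_share=0.0))   # 10.0
# Maximally contentious 51/49 split: the full deposit is at stake.
print(min_bribe(deposit=1000, P=10, minority_share=0.5))   # 1010.0
```

With a large deposit, the bribe budget scales with total stake rather than with the per-round payout, which is the point of the refinement.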
I heard this is explained in the Truthcoin whitepaper. https://gyazo.com/b45dbf4d9e67d5017eea7edc8d412cd8
Another approach is to rely on counter-coordination; essentially, somehow coordinate, perhaps via credible commitments, on voting A (if A is the truth) with probability 0.6 and B with probability 0.4, the theory being that this will allow users to (probabilistically) claim the mechanism's reward and a portion of the attacker's bribe at the same time.
No idea yet on a UI to implement "weighting" of votes by crowd-workers.
This seems to work particularly well in games where, instead of paying out a constant reward to each majority-compliant voter, the game is structured to have a constant total payoff, adjusting individual payoffs as needed to accomplish this goal. In such situations, from a collective-rationality standpoint it is indeed the case that the group earns the highest profit by having 49% of its members vote B to claim the attacker's reward and 51% vote A to make sure the attacker's reward is paid out.
https://gyazo.com/3a565eea636e58beaa4d286a118dab8c
The truth is B, and most participants are honestly following the standard protocol through which the contract discovers that the truth is B, but 20% are attackers or accepted a bribe.
The truth is A, but 80% of participants are attackers or accepted a bribe to pretend that the truth is B.
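The collective-rationality claim above can be illustrated numerically. The sketch below (function name and all numbers hypothetical) compares the group's take when everyone votes the truth A against a coordinated split where 49% vote B to harvest the attacker's bribe, assuming A still wins and the attacker must then pay every B-voter:

```python
def group_profit(n, total_payout, bribe, b_share):
    """Group take when the truth is A and a fraction b_share of the
    coordinated group votes B to harvest the attacker's bribe.
    Assumes b_share < 0.5, so A still wins: the constant total payout
    goes to the A-voters, and the attacker owes the bribe to every
    B-voter. Names and numbers are hypothetical."""
    assert b_share < 0.5
    b_voters = n * b_share
    return total_payout + b_voters * bribe

# All-honest baseline vs. a coordinated 49% bribe harvest:
print(group_profit(100, total_payout=1000, bribe=20, b_share=0.0))   # 1000.0
print(group_profit(100, total_payout=1000, bribe=20, b_share=0.49))  # 1980.0
```

Because the protocol payout is constant in total, the B-voters' defection costs the group nothing while extracting the maximum from the attacker.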
Hence, subjectivity. The power behind subjectivity lies in the fact that concepts like manipulation, takeovers and deceit, not detectable or in some cases even definable in pure cryptography, can be understood by the human community surrounding the protocol just fine. To see how subjectivity may work in action, let us jump straight to an example. The example supplied here will define a new, third, hypothetical form of blockchain or DAO governance, which can be used to complement futarchy and democracy: subjectivocracy. Pure subjectivocracy is defined quite simply:
If everyone agrees, go with the unanimous decision.
If there is a disagreement, say between decision A and decision B, split the blockchain/DAO into two forks, where one fork implements decision A and the other implements decision B.
All forks are allowed to exist; it's left up to the surrounding community to decide which forks they care about. Subjectivocracy is in some sense the ultimate non-coercive form of governance; no one is ever forced to accept a situation where they don't get their own way, the only catch being that if you have policy preferences that are unpopular then you will end up on a fork where few others are left to interact with you. Perhaps, in some futuristic society where nearly all resources are digital and everything that is material and useful is too-cheap-to-meter, subjectivocracy may become the preferred form of government; but until then the cryptoeconomy seems like a perfect initial use case.
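Pure subjectivocracy over a single A/B question can be sketched in a few lines; the decision functions and state here are hypothetical:

```python
def resolve(state, votes, decision_a, decision_b):
    """Pure subjectivocracy for one A/B question.
    Unanimity -> one successor state; disagreement -> two forks,
    and the surrounding community decides which fork it cares about."""
    if all(v == "A" for v in votes):
        return [decision_a(state)]
    if all(v == "B" for v in votes):
        return [decision_b(state)]
    return [decision_a(state), decision_b(state)]  # both forks live on

# Hypothetical decisions: double a DAO fee parameter, or keep it.
raise_fee = lambda s: {**s, "fee": s["fee"] * 2}
keep_fee = lambda s: dict(s)

print(resolve({"fee": 1}, ["A", "A", "A"], raise_fee, keep_fee))
# -> [{'fee': 2}]
print(resolve({"fee": 1}, ["A", "B", "A"], raise_fee, keep_fee))
# -> [{'fee': 2}, {'fee': 1}]
```

No voter is ever forced onto a fork they reject; unpopular preferences simply end up on a sparsely populated fork.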
For another example, we can also see how to apply subjectivocracy to SchellingCoin. First, let us define our "objective" version of SchellingCoin for comparison's sake:
The SchellingCoin mechanism has an associated sub-currency.
Anyone has the ability to "join" the mechanism by purchasing units of the currency and placing them as a security deposit. Weight of participation is proportional to the size of the deposit, as usual.
Anyone has the ability to ask the mechanism a question by paying a fixed fee in that mechanism's currency.
For a given question, all voters in the mechanism vote either A or B.
Everyone who voted with the majority gets a share of the question fee; everyone who voted against the majority gets nothing.
Note that, as mentioned in the post on P + epsilon attacks, there is a refinement by Paul Sztorc under which minority voters lose some of their coins, and the more "contentious" a question becomes the more coins minority voters lose, right up to the point where at a 51/49 split the minority voters lose all their coins to the majority. This substantially raises the bar for a P + epsilon attack. However, raising the bar for us is not quite good enough; here, we are interested in having no exploitability (once again, we formally define "exploitability" as "the protocol provides intrinsic opportunities for profitable attacks") at all. So, let us see how subjectivity can help. We will elide unchanged details:
For a given question, all voters in the mechanism vote either A or B.
If everyone agrees, go with the unanimous decision and reward everyone.
If there is a disagreement, split the mechanism into two on-chain forks, where one fork acts as if it chose A, rewarding everyone who voted A, and the other fork acts as if it chose B, rewarding everyone who voted B.
1. A large amount of staking is required to raise a challenge (50P).
2. The mechanism forks into two.
3. Everyone votes on the challenge. If the result is the same as last time (the challenge failed), the challenger is slashed in full; if the result flips, the challenger receives 100P ("he who warns first is rewarded best").
Among the other voters (those who did not raise the challenge), those who voted with the majority lose 5P and those who voted with the minority lose 10P.
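The fork-and-reward rule above can be sketched as follows (voter names, deposit sizes, and the fee are hypothetical): on unanimity everyone shares the question fee, while on disagreement each fork pays out only to its own side:

```python
def settle(votes, deposits, fee):
    """Subjective SchellingCoin round (a sketch):
    unanimity -> everyone splits the question fee on the single chain;
    disagreement -> two forks, each rewarding its own voters in full."""
    answers = set(votes.values())
    if len(answers) == 1:
        share = fee / len(votes)
        return [{v: deposits[v] + share for v in votes}]
    forks = []
    for winning in sorted(answers):
        # On this fork, only voters for `winning` are rewarded.
        winners = [v for v, a in votes.items() if a == winning]
        share = fee / len(winners)
        forks.append({v: deposits[v] + (share if v in winners else 0.0)
                      for v in votes})
    return forks

votes = {"alice": "A", "bob": "A", "carol": "B"}
deposits = {"alice": 50.0, "bob": 50.0, "carol": 50.0}
for fork in settle(votes, deposits, fee=10.0):
    print(fork)
# -> {'alice': 55.0, 'bob': 55.0, 'carol': 50.0}
# -> {'alice': 50.0, 'bob': 50.0, 'carol': 60.0}
```

The attacker gains nothing: their bribed voters are only rewarded on the fork the honest community abandons.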
Here is the equivalence: exponential subjective scoring. In ESS, the "score" that a client attaches to a fork depends not just on the total work done on the fork, but also on the time at which the fork appeared; forks that come later are explicitly penalized. Hence, the set of always-online users can see that a given fork came later, and therefore that it is a hostile attack, and so they will refuse to mine on it even if its proof of work chain grows to have much more total work done on it. Their incentive to do this is simple: they expect that eventually the attacker will give up, and so they will continue mining and eventually overtake the attacker, making their fork the universally accepted longest one again; hence, mining on the original fork has an expected value of 25 BTC and mining on the attacking fork has an expected value of zero. Additionally, we may note that, unlike Ripple consensus, the hybrid model is still compatible with the idea of blockchains "talking" to each other by containing a minimal "light" implementation of each other's protocols. The reason is that, while the scoring mechanism is not "absolute" from the point of view of a node without history suddenly looking at every block, it is perfectly sufficient from the point of view of an entity that remains online over a long period of time, and a blockchain certainly is such an entity.
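A minimal sketch of the ESS idea (the half-life parameter and exponential form are illustrative assumptions, not the exact scheme): a fork's score is its total work, discounted by how late the fork appeared from an always-online client's point of view:

```python
import math

def ess_score(total_work, appeared_at, first_seen_at, halflife=3600.0):
    """Exponential subjective scoring (a sketch): discount a fork's
    total proof-of-work by how long after the client first saw a
    competitor this fork appeared. Parameters are illustrative."""
    lateness = max(0.0, appeared_at - first_seen_at)
    return total_work * math.exp(-lateness * math.log(2) / halflife)

# An attacker fork with 20% more work, revealed two half-lives late,
# still scores below the original fork for an always-online client:
original = ess_score(total_work=100.0, appeared_at=0.0, first_seen_at=0.0)
attacker = ess_score(total_work=120.0, appeared_at=7200.0, first_seen_at=0.0)
print(original > attacker)  # True
```

The subjectivity is the point: a node that watched the chain the whole time can distinguish the late fork, even though a freshly syncing node cannot.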
The only solution to this within the delegation paradigm is to make it extremely risky to dole out signing privileges to untrusted parties; the simplest approach is to modify Slasher to require a large deposit, and slash the deposit as well as the reward in the event of double-signing.
Verify depth of signature
block_number, uncle_hash — this transaction is valid if (1) the block with the given uncle_hash has already been validated, (2) the block with the given uncle_hash has the given block number, and (3) the parent of that uncle is either in the main chain or was included earlier as an uncle. During the act of processing this transaction, if addresses that double-signed at that height are detected, they are appropriately penalized.
block_number, uncle_parent_hash, vtx — this transaction is valid if (1) the block with the given uncle_parent_hash has already been validated, (2) the given virtual transaction is valid at the given block height with the state at the end of uncle_parent_hash, and (3) the virtual transaction shows a signature by an address which also signed a block at the given block_number in the main chain. This transaction penalizes that one address.
https://gyazo.com/ce6fa50f2be5930673c24089941cdd21
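The detection step common to both penalty transactions — finding an address that signed at the same height on the main chain and on an uncle — can be sketched as follows (the data layout is hypothetical):

```python
def find_double_signers(main_chain_sigs, uncle_sigs):
    """Detect addresses that signed a block at some height on the main
    chain and also signed an uncle at the same height (a sketch of the
    Slasher-style penalty check described above)."""
    offenders = {}
    for height, signers in uncle_sigs.items():
        main = main_chain_sigs.get(height, set())
        dupes = main & signers
        if dupes:
            offenders[height] = dupes
    return offenders

main_chain_sigs = {100: {"0xaaa", "0xbbb"}, 101: {"0xccc"}}
uncle_sigs = {100: {"0xbbb", "0xddd"}}  # 0xbbb double-signed at height 100
print(find_double_signers(main_chain_sigs, uncle_sigs))
# -> {100: {'0xbbb'}}
```

Anyone who observes both signatures can submit the proof, so detection does not depend on any single honest watcher.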
I'm not sure if there are any collusion smart contracts (i.e., on-chain bribes) being run elsewhere, but the double-signatures themselves are visible. Could we then trace them back and slash the mastermind?